133 research outputs found

    Response Time Approximations in Fork-Join Queues

    No full text
    Fork-join queueing networks model a network of parallel servers in which an arriving job splits into a number of subtasks that are serviced in parallel. Fork-join queues can be used to model disk arrays. A response time approximation of the fork-join queue is presented that attempts to comply with the additional constraints of modelling a disk array. This approximation is compared with existing analytical approximations of the fork-join queueing network.
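
    The abstract does not reproduce the approximation itself. Purely as an illustrative sketch (not the paper's method), the following simulates a fork-join queue of k homogeneous M/M/1 subtask queues sharing a Poisson arrival stream, using Lindley's recursion per queue; a job completes when its slowest subtask finishes. All parameter names and values are assumptions, and such a simulation only provides a baseline against which analytical approximations could be checked.

```python
import numpy as np

def simulate_fork_join(k=4, lam=0.5, mu=1.0, n_jobs=200_000, seed=0):
    """Monte Carlo estimate of the mean job response time in a fork-join queue.

    Each arriving job forks into k subtasks, one per M/M/1 server queue; the
    job completes when the slowest subtask finishes (the join point).
    Per-queue waiting times follow Lindley's recursion.
    """
    rng = np.random.default_rng(seed)
    inter_arrivals = rng.exponential(1.0 / lam, n_jobs)   # shared arrival stream
    services = rng.exponential(1.0 / mu, (n_jobs, k))     # independent subtask services
    wait = np.zeros(k)                                    # current waiting time in each queue
    resp = np.empty(n_jobs)
    for n in range(n_jobs):
        # Subtask response time in queue i = waiting time + own service time;
        # the job's response time is the maximum over its k subtasks.
        resp[n] = np.max(wait + services[n])
        # Lindley's recursion for the waiting time seen by the next subtask.
        wait = np.maximum(0.0, wait + services[n] - inter_arrivals[n])
    return resp.mean()

if __name__ == "__main__":
    print("Estimated mean fork-join response time:", simulate_fork_join())
```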

    State-Space Size Estimation By Least-Squares Fitting

    No full text
    We present a method for estimating the number of states in the continuous time Markov chains (CTMCs) underlying high-level models using least-squares fitting. Our work improves on existing techniques by producing a numerical estimate of the number of states rather than classifying the state space into one of three types. We demonstrate the practicality and accuracy of our approach on a number of CTMCs generated from three Generalised Stochastic Petri Net (GSPN) models with up to 11 million states.
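
    The abstract does not specify what quantity is fitted to what, so the snippet below is only a generic illustration of the idea rather than the authors' technique: state-space sizes observed for small scaling parameters of a hypothetical model are fitted in log-space and extrapolated to a larger parameter. All numbers are invented for illustration.

```python
import numpy as np

# State-space sizes observed by explicitly generating the CTMC for small
# scaling parameters of some hypothetical GSPN model (illustrative numbers only).
params = np.array([1, 2, 3, 4, 5], dtype=float)
states = np.array([56, 1_240, 18_700, 212_000, 1_950_000], dtype=float)

# Fit log(states) with a low-order polynomial in the scaling parameter and
# extrapolate to a parameter too large to generate explicitly.
coeffs = np.polyfit(params, np.log(states), deg=2)
estimate = np.exp(np.polyval(coeffs, 7.0))

print(f"Estimated number of states at parameter 7: {estimate:.3e}")
```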

    Towards Safer Smart Contracts: A Survey of Languages and Verification Methods

    Get PDF
    With a market capitalisation of over USD 205 billion in just under ten years, public distributed ledgers have experienced significant adoption. Apart from novel consensus mechanisms, their success is also attributable to smart contracts. These programs allow distrusting parties to enter agreements that are executed autonomously. However, implementation issues in smart contracts have caused severe losses to the users of such contracts. Significant efforts are being made to improve their security by introducing new programming languages and advanced verification methods. We provide a survey of those efforts in two parts. First, we introduce several smart contract languages, focussing on security features. To that end, we present an overview concerning paradigm, type, instruction set, semantics, and metering. Second, we examine verification tools and methods for smart contracts and distributed ledgers. Accordingly, we introduce their verification approach, level of automation, coverage, and supported languages. Last, we present future research directions including formal semantics, verified compilers, and automated verification.

    Self-adaptive containers: interoperability extensions and cloud integration

    Get PDF
    Driven by an ever-increasing diversity of application contexts, execution environments and scalability requirements, modern software is faced with the challenge of frequent code refactoring. To address this, we have proposed an STL-like self-adaptive container library, which dynamically changes its data structures and resource usage to meet programmer-specified Service Level Objectives relating to performance, reliability and primary memory use. A prototype of this library has been implemented and utilised in two case studies to prove its viability. In the present work, we explore a low-cost means to extend our library to satisfy wider classes of Service Level Objectives. This is achieved through the integration of third-party container frameworks, which exploit parallelism to boost performance and disk-based data offloading to reduce primary memory consumption, and the integration of cloud storage services, which offer cost-effective location-free storage. We demonstrate our library's application in a state-space exploration case study. Experimental results show that, with very low programmer overhead, our library can improve performance with a 76% reduction in insertion time and an 86% reduction in search time, and can also exploit out-of-core storage, including cloud storage.
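
    The library itself is C++/STL-like and its interface is not given in the abstract. As a language-agnostic toy sketch of the underlying idea only (not the library's API), the container below monitors its own membership-test latency against a programmer-specified objective and switches its backing data structure from a list to a set when that objective is violated. The class name and SLO threshold are assumptions.

```python
import time

class AdaptiveCollection:
    """Toy self-adaptive container: starts as a list (cheap appends) and
    switches to a set (fast membership tests) if an observed lookup exceeds
    a service level objective. For simplicity this toy does not preserve
    duplicates or insertion order after adaptation."""

    def __init__(self, lookup_slo_seconds=1e-4):
        self._data = []               # initial representation: list
        self._is_set = False
        self._slo = lookup_slo_seconds

    def add(self, item):
        if self._is_set:
            self._data.add(item)
        else:
            self._data.append(item)

    def __contains__(self, item):
        start = time.perf_counter()
        found = item in self._data
        elapsed = time.perf_counter() - start
        # Adapt: if this lookup violated the SLO, switch representation.
        if not self._is_set and elapsed > self._slo:
            self._data = set(self._data)
            self._is_set = True
        return found

bag = AdaptiveCollection()
for i in range(200_000):
    bag.add(i)
print(199_999 in bag)   # slow list scan may trigger adaptation
print(123_456 in bag)   # subsequent lookups use the set
```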

    CloudScope: diagnosing and managing performance interference in multi-tenant clouds

    Get PDF
    Virtual machine consolidation is attractive in cloud computing platforms for several reasons, including reduced infrastructure costs, lower energy consumption and ease of management. However, the interference between co-resident workloads caused by virtualization can violate the service level objectives (SLOs) that the cloud platform guarantees. Existing solutions to minimize interference between virtual machines (VMs) are mostly based on comprehensive micro-benchmarks or online training, which makes them computationally intensive. In this paper, we present CloudScope, a system for diagnosing interference in multi-tenant cloud systems in a lightweight way. CloudScope employs a discrete-time Markov Chain model for the online prediction of performance interference of co-resident VMs. It uses the results to optimally (re)assign VMs to physical machines and to optimize the hypervisor configuration, e.g. the CPU share it can use, for different workloads. We have implemented CloudScope on top of the Xen hypervisor and conducted experiments using a set of CPU, disk, and network intensive workloads and a real system (MapReduce). Our results show that CloudScope interference prediction achieves an average error of 9%. The interference-aware scheduler improves VM performance by up to 10% compared to the default scheduler. In addition, the hypervisor reconfiguration can improve network throughput by up to 30%.
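
    CloudScope's model parameters are not given in the abstract; the sketch below only illustrates the generic discrete-time Markov chain machinery one might use for such predictions. Given an assumed transition matrix over coarse interference levels, it predicts the distribution a few steps ahead and the long-run (stationary) distribution. The states and probabilities are hypothetical.

```python
import numpy as np

# Hypothetical DTMC over three coarse interference states of a co-resident VM:
# 0 = low, 1 = moderate, 2 = severe interference. Rows sum to 1.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.20, 0.60, 0.20],
    [0.05, 0.35, 0.60],
])

# Short-horizon prediction: distribution after n steps from a known current state.
current = np.array([1.0, 0.0, 0.0])          # currently in the 'low' state
after_5 = current @ np.linalg.matrix_power(P, 5)

# Long-run behaviour: stationary distribution pi with pi P = pi, via power iteration.
pi = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()

print("Distribution after 5 steps:", np.round(after_5, 3))
print("Stationary distribution:   ", np.round(pi, 3))
```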

    Validation of Large Zoned RAID Systems

    No full text
    Building on our prior work, we present an improved model for large partial stripe following full stripe writes in RAID 5. This was necessary because we observed that our previous model tended to underestimate measured results. To date, we have only validated these models against RAID systems with at most four disks. Here we validate our improved model, and also our existing models for other read and write configurations, against measurements taken from an eight-disk RAID array.
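
    The queueing models themselves are not reproduced in the abstract. As background only, the snippet below tallies the textbook per-stripe disk operation counts for RAID 5 writes (full-stripe writes compute parity from the new data, while partial-stripe writes pay either the read-modify-write or the reconstruct-write cost); the authors' response time models operate at a much finer level than this accounting.

```python
def raid5_write_ops(n_disks, blocks_written):
    """Disk operations needed to service one logical write to a single RAID 5
    stripe with n_disks disks (n_disks - 1 data blocks plus 1 parity block).

    blocks_written: number of data blocks in the stripe being updated.
    Returns (operations, strategy), using standard textbook accounting only.
    """
    data_disks = n_disks - 1
    assert 1 <= blocks_written <= data_disks
    if blocks_written == data_disks:
        # Full-stripe write: parity is computed from the new data alone.
        return n_disks, "full stripe"
    # Partial stripe: pick the cheaper of the two classical strategies.
    rmw = 2 * blocks_written + 2                               # read-modify-write
    reconstruct = (data_disks - blocks_written) + blocks_written + 1
    return min(rmw, reconstruct), ("read-modify-write" if rmw <= reconstruct
                                   else "reconstruct-write")

for k in range(1, 8):
    print(k, raid5_write_ops(8, k))
```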

    Response Time Densities in Generalised Stochastic Petri Net Models.

    No full text
    Generalised Stochastic Petri nets (GSPNs) have been widely used to analyse the performance of hardware and software systems. This paper presents a novel technique for the numerical determination of response time densities in GSPN models. The technique places no structural restrictions on the models that can be analysed, and allows for the high-level specification of multiple source and destination markings, including any combination of tangible and vanishing markings. The technique is implemented using a scalable parallel Laplace transform inverter that employs a modified Laguerre inversion technique. We present numerical results, including a study of the full distribution of end-to-end response time in a GSPN model of the Courier communication protocol software. The numerical results are validated against simulation.
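
    The paper's inverter is based on a modified Laguerre method, which the abstract does not reproduce. To illustrate numerical Laplace transform inversion in general, the sketch below instead uses the simpler Euler (Abate-Whitt) summation technique, checked against a transform whose density is known; it is not the authors' algorithm, and the parameter values are the commonly quoted defaults.

```python
from math import comb, exp, pi

def euler_invert(F, t, a=18.4, n=15, m=11):
    """Numerically invert a Laplace transform F(s) at time t > 0 using the
    Euler (Abate-Whitt) method: trapezoidal discretisation of the Bromwich
    integral followed by Euler summation of the alternating series.
    This is NOT the modified Laguerre technique used in the paper."""
    def partial_sum(terms):
        s = 0.5 * F(a / (2 * t)).real
        for k in range(1, terms + 1):
            s += (-1) ** k * F((a + 2j * k * pi) / (2 * t)).real
        return exp(a / 2) / t * s
    # Euler summation over the last m+1 partial sums accelerates convergence.
    return sum(comb(m, j) * 2 ** -m * partial_sum(n + j) for j in range(m + 1))

# Sanity check: F(s) = 1/(s+1) is the transform of the density exp(-t).
F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, euler_invert(F, t), exp(-t))
```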

    Thinking the GOAT: imitating tennis styles

    Get PDF
    A tactically aware coach is key to improving tennis players’ games; a coach analyses past matches with two considerations in mind: 1) the style of the player and how that style translates to real-world shot-making, and 2) the intent of a shot, irrespective of the outcome. Modern Hawk-Eye technology deployed in top-tier tournaments has enabled deeper analysis of professional matches than ever before. The aim of this paper is to emulate and augment the qualities of great coaches using data collected by Hawk-Eye; we develop a deep learning approach to imitate tennis players’ responses and to learn individual player styles efficiently, and we demonstrate this using performance metrics and illustrations.

    Approximate Queueing Network Analysis of Patient Treatment Times

    No full text
    We develop an approximate generating function analysis (AGFA) technique which approximates the Laplace transform of the probability density function of customer response time in networks of queues with class-based priorities. From the approximated Laplace transform, we derive the first two moments of customer response time. This technique is applied to a model of a large hospital's Accident and Emergency department, for which we obtain the mean and standard deviation of total patient service time. We experiment with different patient-handling priority schemes and compare the AGFA moments with the results from a discrete event simulation.
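
    The moment-extraction step mentioned above follows the standard relation E[T^k] = (-1)^k L^(k)(0) for the Laplace transform L(s) of a density. The sketch below applies it symbolically to an illustrative transform (an Erlang-2 service time), not to the AGFA output itself, which the abstract does not give.

```python
import sympy as sp

s, mu = sp.symbols('s mu', positive=True)

# Illustrative Laplace transform of a response time density: Erlang-2 with rate mu.
L = (mu / (s + mu)) ** 2

# k-th raw moment: E[T^k] = (-1)^k * d^k L / ds^k evaluated at s = 0.
m1 = (-sp.diff(L, s)).subs(s, 0)            # mean
m2 = sp.diff(L, s, 2).subs(s, 0)            # second raw moment
std = sp.sqrt(sp.simplify(m2 - m1 ** 2))    # standard deviation

print("mean =", sp.simplify(m1))            # 2/mu
print("std  =", sp.simplify(std))           # sqrt(2)/mu
```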